Hessian Matrices of Penalty Functions for Solving Constrained-optimization Problems
Abstract
This paper deals with the Hessian matrices of penalty functions, evaluated at their minimizing point. It is concerned with the condition number of these matrices for small values of a controlling parameter. At the end of the paper a comparison is made between different types of penalty functions on the grounds of the results obtained.

1. Classification of penalty-function techniques

Throughout this paper we shall be concerned with the problem

minimize f(x) subject to g_i(x) ≥ 0; i = 1, ..., m,   (1.1)

where x denotes an element of the n-dimensional vector space E_n. We shall be assuming that the functions f, −g_1, ..., −g_m are convex and twice differentiable with continuous second-order partial derivatives on an open, convex subset V of E_n. The constraint set

R = {x | g_i(x) ≥ 0; i = 1, ..., m}   (1.2)

is a bounded subset of V. The interior R_0 of R is non-empty.

We consider a number of penalty-function techniques for solving problem (1.1). One can distinguish two classes, both of which have been referred to by expressive names. The interior-point methods operate in the interior R_0 of R. The penalty function is given by

B_r(x) = f(x) − r Σ_{i=1}^{m} φ[g_i(x)],   (1.3)

where φ is a concave function of one variable, say y. Its derivative φ′ reads

φ′(y) = y^(−ν)   (1.4)

with a positive integer ν. A point x(r) minimizing (1.3) over R_0 then exists for any r > 0. Any convergent sequence {x(r_k)}, where {r_k} is a monotonic, decreasing null sequence as k → ∞, converges to a solution of (1.1).

The exterior-point methods or outside-in methods present an approach to a minimum solution from outside the constraint set. The general form of the penalty function is given by

L_r(x) = f(x) − r^(−1) Σ_{i=1}^{m} ψ[g_i(x)],   (1.5)

where ψ is a concave function of one variable y, such that

ψ(y) = 0 for y ≥ 0,  ψ(y) = ω(y) for y ≤ 0.

The derivative ω′ of ω is given by

ω′(y) = (−y)^ν.

Let z(r) denote a point minimizing (1.5) over E_n.
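The two families can be made concrete with their simplest first-order members (ν = 1): the logarithmic barrier φ(y) = ln y and the quadratic loss ω(y) = −y²/2. The sketch below is illustrative only (the test problem and helper names are not from the paper): it minimizes f(x) = x subject to g(x) = x − 1 ≥ 0, whose solution is x* = 1, and shows the barrier minimizer x(r) = 1 + r approaching x* from inside the constraint set while the loss minimizer z(r) = 1 − r approaches it from outside.

```python
import math

def argmin_1d(f, lo, hi, iters=200):
    """Ternary search for the minimizer of a unimodal function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# Illustrative test problem: minimize f(x) = x subject to g(x) = x - 1 >= 0.
f = lambda x: x
g = lambda x: x - 1.0

def B(x, r):
    """First-order barrier function: phi(y) = ln y, so phi'(y) = 1/y."""
    return f(x) - r * math.log(g(x))      # defined only where g(x) > 0

def L(x, r):
    """First-order loss function: omega(y) = -y**2/2, so omega'(y) = -y."""
    y = g(x)
    psi = 0.0 if y >= 0 else -y * y / 2.0
    return f(x) - psi / r                  # adds (x-1)**2 / (2r) when infeasible

for r in (0.1, 0.01):
    x_in = argmin_1d(lambda x: B(x, r), 1.0 + 1e-9, 3.0)   # x(r) = 1 + r
    z_out = argmin_1d(lambda x: L(x, r), -1.0, 3.0)        # z(r) = 1 - r
    print(r, round(x_in, 4), round(z_out, 4))
```

As r decreases, both sequences of minimizers converge to the constrained solution, one staying strictly feasible, the other strictly infeasible.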
Any convergent sequence {z(r_k)}, where {r_k} again denotes a monotonic, decreasing null sequence, converges to a solution of (1.1).

It will be convenient to extend the terminology that we have been using in previous papers. Following Murray 5) we shall refer to interior-point penalty functions of the type (1.3) as barrier functions. The exterior-point penalty functions (1.5) will briefly be indicated as loss functions, a name which has also been used by Fiacco and McCormick 1). Furthermore, we introduce a classification based on the behaviour of the functions φ′ and ω′ in a neighbourhood of y = 0. A barrier function is said to be of order ν if the function φ′ has a pole of order ν at y = 0. Similarly, a loss function is of order ν if the function ω′ has a zero of order ν at y = 0.

2. Conditioning

An intriguing point is the choice of a penalty function for numerical purposes. We shall not repeat here all the arguments supporting the choice of first-order penalty functions for computational purposes. Our concern is an argument which has been introduced only by Murray 5), namely the question of "conditioning". This is a qualification referring to the Hessian matrix of a penalty function. The motivation for such a study is the idea that failures of (second-order) unconstrained-minimization techniques may be due to ill-conditioning of the Hessian matrix at some iteration points.

Throughout this paper it is tacitly assumed that penalty functions are strictly convex, so that they have a unique minimizing point in their definition area. We shall primarily be concerned with the Hessian matrix of penalty functions at the minimizing point. In what follows we shall refer to it as the principal Hessian matrix. The reason will be clear: in a neighbourhood of the minimizing point a useful approximation of a penalty function is given by a quadratic function, with the principal Hessian matrix as the coefficient matrix of the quadratic term.
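To see why the principal Hessian matrix deserves attention, consider a small worked case (again illustrative, not taken from the paper): minimize f(x1, x2) = x1² + x2 subject to x2 ≥ 0, with the first-order barrier B_r(x) = x1² + x2 − r ln x2. The minimizing point is (0, r), where the Hessian is diagonal with entries (2, r/x2²) = (2, 1/r), so its condition number grows like 1/(2r) as r ↓ 0 — the small-parameter ill-conditioning discussed above.

```python
# Illustrative example: conditioning of the principal Hessian matrix of the
# barrier B_r(x1, x2) = x1**2 + x2 - r*ln(x2) for the problem
#   minimize x1**2 + x2  subject to  x2 >= 0.
# Setting the gradient to zero gives the minimizing point (0, r).

def principal_hessian_condition(r):
    """Condition number of the (diagonal) Hessian of B_r at its minimizer."""
    x2 = r                             # minimizing point: x1 = 0, x2 = r
    eigenvalues = [2.0, r / x2**2]     # diagonal entries are the eigenvalues
    return max(eigenvalues) / min(eigenvalues)

for r in (0.1, 0.01, 0.001):
    print(r, principal_hessian_condition(r))   # grows like 1/(2r)
```

A quadratic model built from this matrix becomes progressively more elongated as r shrinks, which is exactly the situation in which second-order unconstrained-minimization routines can stall.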
It is therefore reasonable to assume that unconstrained mini…
Similar articles
Superlinearly convergent exact penalty projected structured Hessian updating schemes for constrained nonlinear least squares: asymptotic analysis
We present a structured algorithm for solving constrained nonlinear least squares problems, and establish its local two-step Q-superlinear convergence. The approach is based on an adaptive structured scheme due to Mahdavi-Amiri and Bartels of the exact penalty method of Coleman and Conn for nonlinearly constrained optimization problems. The structured adaptation also makes use of the ideas of N...
An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems
Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
A BFGS-SQP method for nonsmooth, nonconvex, constrained optimization and its evaluation using relative minimization profiles
We propose an algorithm for solving nonsmooth, nonconvex, constrained optimization problems as well as a new set of visualization tools for comparing the performance of optimization algorithms. Our algorithm is a sequential quadratic optimization method that employs BFGS quasi-Newton Hessian approximations and an exact penalty function whose parameter is controlled using a steering strategy. We...
Constrained Nonlinear Least Squares: A Superlinearly Convergent Projected Structured Secant Method
Numerical solution of nonlinear least-squares problems is an important computational task in science and engineering. Effective algorithms have been developed for solving nonlinear least squares problems. The structured secant method is a class of efficient methods developed in recent years for optimization problems in which the Hessian of the objective function has some special structure. A pr...
Multiobjective Imperialist Competitive Evolutionary Algorithm for Solving Nonlinear Constrained Programming Problems
Nonlinear constrained programming problems (NCPP) arise in a diverse range of sciences such as portfolio selection and economic management. In this paper, a multiobjective imperialist competitive evolutionary algorithm for solving NCPP is proposed. Firstly, we transform the NCPP into a biobjective optimization problem. Secondly, in order to improve the diversity of the evolution country swarm, and he...